Section: Partnerships and Cooperations

European Initiatives

FP7 & H2020 Projects

D3: Drawing Interpretation for 3D Design

Participants : Yulia Gryaditskaya, Tibor Stanko, Bastien Wailly, David Jourdan, Adrien Bousseau, Felix Hähnlein.

Line drawing is a fundamental tool for designers to quickly visualize 3D concepts. The goal of this ERC project is to develop algorithms capable of understanding design drawings. The first 30 months of the project allowed us to make significant progress in our understanding of how designers draw, and to propose preliminary solutions to the challenge of reconstructing 3D shapes from design drawings.

To better understand design sketching, we have collected a dataset of more than 400 professional design sketches [17]. We manually labeled the drawing techniques used in each sketch, and we registered all sketches to reference 3D models. Analyzing this data revealed systematic strategies employed by designers to convey 3D shapes, which will inspire the development of novel algorithms for drawing interpretation. In addition, our annotated sketches and associated 3D models form a challenging benchmark to test existing methods.

We proposed several methods to recover 3D information from drawings. A first family of methods employs deep learning to predict the 3D shape represented in a drawing. We applied this strategy in the context of architectural design, where we reconstruct 3D buildings by recognizing their constituent components (building mass, facades, windows). We also presented an interactive system that allows users to create 3D objects by drawing from multiple viewpoints [14]. The second family of methods leverages geometric properties of the drawn lines to optimize the 3D reconstruction. In particular, we exploited properties of developable surfaces to reconstruct sketches of fashion items.

A long-term goal of our research is to evaluate the physical validity of a concept directly from a drawing. We obtained promising results towards this goal for the particular case of mechanical objects. We proposed an interactive system where users design the shape and motion of an articulated object, and our method automatically synthesizes a mechanism that animates the object while avoiding collisions [18]. The geometry synthesized by our method is ready to be fabricated for rapid prototyping.

ERC FunGraph

Participants : George Drettakis, Thomas Leimkühler, Sébastien Morgenthaler, Rada Deeb, Stavros Diolatzis, Siddhant Prakash, Simon Rodriguez, Julien Philip.

The ERC Advanced Grant FunGraph proposes a new methodology by introducing the concepts of rendering uncertainty and input uncertainty. We define output or rendering uncertainty as the expected error of a rendering solution, over the parameters and algorithmic components used, with respect to an ideal image, and input uncertainty as the expected error of the captured content, over the parameters involved in its generation, with respect to the ideal scene being represented. Here the ideal scene is a perfectly accurate model of the real world, i.e., its geometry, materials and lights; the ideal image is an infinite-resolution, high-dynamic-range image of this scene.
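As an illustration only (the notation below is ours, not taken from the project text), these two quantities could be written as

\[
  U_{\mathrm{render}}(S) = \mathbb{E}_{\theta}\big[\, d\big(R_{\theta}(S),\, I^{*}\big) \big],
  \qquad
  U_{\mathrm{input}} = \mathbb{E}_{\phi}\big[\, d\big(C_{\phi},\, S^{*}\big) \big],
\]

where $R_{\theta}(S)$ is the image rendered from scene representation $S$ under algorithmic choices and parameters $\theta$, $I^{*}$ is the ideal image, $C_{\phi}$ is the content captured with acquisition parameters $\phi$, $S^{*}$ is the ideal scene, and $d(\cdot,\cdot)$ is an error metric in image or scene space.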

By introducing methods to estimate rendering uncertainty, we will quantify the expected error of previously incompatible rendering components within a single methodology covering accurate, approximate and image-based renderers. This will allow FunGraph to define unified rendering algorithms that exploit the advantages of these very different approaches in a single algorithmic framework, providing a fundamentally different approach to rendering. A key component of these solutions is the use of captured content: we will develop methods to estimate input uncertainty and to propagate it to the unified rendering algorithms, allowing this content to be exploited by all rendering approaches.

The goal of FunGraph is to fundamentally transform computer graphics rendering, by providing a solid theoretical framework based on uncertainty to develop a new generation of rendering algorithms. These algorithms will fully exploit the spectacular – but previously disparate and disjoint – advances in rendering, and benefit from the enormous wealth offered by constantly improving captured input content.

Emotive

Participants : Julien Philip, Sebastiàn Vizcay, George Drettakis.

https://emotiveproject.eu/